The Slanted Edge Method

2023-03-10 05:46

My preferred method for measuring the spatial resolution performance of photographic equipment these days is the slanted edge method.  It requires minimal additional effort compared to capturing and simply eyeballing a Siemens star or other test chart, but it gives immensely more useful, accurate, quantitative information in the language and units that have been used to characterize optical systems for over a century: it produces a good approximation to the Modulation Transfer Function of the two-dimensional camera/lens system impulse response – at the location of the edge, in the direction perpendicular to it.

Much of what there is to know about an imaging system's spatial resolution performance can be deduced by analyzing its MTF curve, which represents the system's ability to capture increasingly fine detail from the scene, starting from perceptually relevant metrics like MTF50, discussed a while back.  In fact the area under the curve, weighted by some approximation of the Contrast Sensitivity Function of the Human Visual System, is the basis for many other, better accepted single-figure 'sharpness' metrics with names like Subjective Quality Factor (SQF), Square Root Integral (SQRI), CMT Acutance, etc.  And all this simply from capturing the image of a slanted edge, which one can actually and somewhat easily do at home, as presented in the next article.

In the Beginning there was Frans

I was first introduced to the slanted edge method by Frans van den Bergh, author of the excellent open-source MTF Mapper, used extensively in these pages to produce linear spatial resolution curves from raw data.  I highly recommend Frans' program, documentation and blog.

I will assume that you know that the Modulation Transfer Function (MTF) represents the Spatial Frequency Response of a linear, space invariant imaging system that can be obtained by taking the magnitude (modulus) of the Fourier Transform of the system’s impulse response otherwise known as its Point Spread Function (PSF).  It allows us to determine a system’s ‘sharpness’ performance at all spatial frequencies in one go.

The terms MTF (Modulation Transfer Function) and SFR (Spatial Frequency Response) are often used interchangeably in photography.  They both refer to the expected Michelson Contrast at the given linear spatial frequency, so in a way in imaging they can also be considered a Contrast Transfer Function.

Point to Line to Edge

In a nutshell, the method is based on the idea that it is difficult to obtain the PSF of a camera/lens system by, say, taking a capture of a single distant star, a POINT,  against a black sky because of the relative intensity and size of the target: it’s typically too small compared to the size of a pixel, too dim and too noisy.   Because the imaging system is assumed to be linear one could build intensity up and reduce noise by capturing a number of identical closely spaced stars aligned in a row (a straight LINE), then collapse the line into a unidimensional Line Spread Function to obtain the MTF in the direction perpendicular to the line.    Even better would be capturing a number of contiguous such lines of stars, which at this point could be considered a white EDGE against a darker sky.   Alas such constellations are hard to come by – but not to worry because we can make our own.  Here is what one looks like (228×146 raw pixels):

Figure 1. A slanted edge captured in a raw file.

Pretty simple, right?

Edge to Line to Modulation Transfer Function

The following picture is how Frans explains the slanted edge method: the raw data intensities of the two-dimensional edge captured by our imaging system are projected onto a one-dimensional line perpendicular to the edge, producing its intensity profile (the Edge Spread Function or ESF); the derivative of the ESF is the Line Spread Function (LSF, not shown below), which is then Fourier transformed into the Modulation Transfer Function (MTF) that we are after:

Figure 2. How the 2D raw capture of a slanted edge is converted to a 1D edge intensity profile perpendicular to it, the Edge Spread Function.  Image courtesy of Frans van den Bergh.

Wait, I thought you said that the MTF was obtained from the Point Spread Function, not the Line Spread Function?

Well, it turns out that the Fourier Transform of the 1-dimensional LSF is in fact equal to the Fourier Transform of the 2-dimensional PSF in the direction perpendicular to the edge, shown as the green arrow labeled 'Edge Normal' in the picture above.  So, as better explained further down, by applying the slanted-edge method we obtain a measurement of the 2D MTF of the imaging system as set up, in just one direction: the direction perpendicular to the edge.
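This equivalence can be checked numerically with a few lines of numpy.  The sketch below (an illustration, not MTF Mapper's code) builds a synthetic Gaussian PSF, projects it onto one axis to get an LSF, and verifies that the transform of the projection matches the central slice of the 2D transform:

```python
import numpy as np

# Numeric check: the FT of the 1D projection (LSF) of a 2D PSF equals the
# central slice of the 2D FT of that PSF, taken in the projection direction.
x = np.arange(-32, 32)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))    # synthetic Gaussian PSF

lsf = psf.sum(axis=0)                            # project the PSF onto the x axis
mtf_from_lsf = np.abs(np.fft.fft(lsf))
mtf_from_lsf /= mtf_from_lsf[0]                  # normalize so MTF(0) = 1

otf2d = np.fft.fft2(psf)
mtf_slice = np.abs(otf2d[0, :])                  # slice at zero frequency along y
mtf_slice /= mtf_slice[0]

assert np.allclose(mtf_from_lsf, mtf_slice)      # the two curves coincide
```

The choice of a Gaussian PSF here is arbitrary; the identity holds for any well-behaved PSF.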

Such a Modulation Transfer Function in our context represents the ability of an imaging system as a whole to transfer linear spatial resolution information (i.e. subject detail) to the raw file – at the location of the edge and in the direction perpendicular to it.

The Edge is an Integral Part of the System

The edge should ideally be perfectly 'sharp', straight and of good contrast.  One of the better such targets is a back-lit utility knife edge or razor blade.  In practice, however, it can be the shadow cast by a bridge in a satellite photo or a high-quality print of a uniformly black rectangle on uniformly white matte paper.  Depending on the regularization method and on more abstract concerns such as rational dependence, it may need to be tilted (slanted), ideally between 4 and 6 degrees off the vertical or horizontal, and should be at least 50 pixels long – 100 or more is better.  But not too long if we want to minimize the effects of lens distortion and are interested in the performance of the system in just a small spot: the resulting MTF is the localized average of the performance of the system along the length of the edge, and even excellent lenses can vary significantly over the distance covered by a long edge.

The edge on its own was produced by a process that itself has an MTF.  This edge MTF gets multiplied with the MTF of the other imaging components (lens, filter stack, sensor) to obtain the system’s MTF.  Therefore the relative quality of the edge (i.e. how much it can approximate an ideal step function at the capturing distance) determines the upper limit on the measured MTF.  If one uses a lower definition edge, such as one printed at home, the results may be less sensitive but still have comparative value to other measurements taken with the same target.

Reconstituting the Continuous Intensity Profile

The advantage of the slanted-edge method is that it effectively super-samples the edge by the number of pixels along it.  Assuming the edge is perfectly straight  – not a given in some recent camera formats that rely on software to correct for poor lens distortion characteristics – if it is 200 pixels long then the edge profile is over-sampled two hundred times with great benefits in terms of effectively cancelling quantization and reducing the impact of noise and aliasing on spatial resolution measurement.  With the proper setup, this gets around the fact that a digital sensor is typically not space invariant  because of its edgy pixels and rectangular sampling grid.
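The projection step behind this super-sampling can be sketched as follows.  This is a simplified stand-in for what MTF Mapper actually does (which includes careful edge-location refinement); the 5-degree slant, blur width and image size are arbitrary illustration choices:

```python
import numpy as np

# Sketch of the super-sampling idea: every pixel center is projected onto the
# edge normal, so an edge crossing thousands of pixels yields a cloud of
# (distance, intensity) samples spaced far more finely than one pixel.
h, w = 100, 60
theta = np.deg2rad(5.0)                       # edge slanted 5 degrees off vertical
yy, xx = np.mgrid[0:h, 0:w].astype(float)

# Signed distance of each pixel center from an edge line through the image center.
dist = (xx - w / 2) * np.cos(theta) - (yy - h / 2) * np.sin(theta)

# Synthetic blurred edge: a logistic (smoothed step) profile across the line.
img = 1 / (1 + np.exp(-dist / 1.2))

# One (distance, intensity) pair per pixel, sorted along the edge normal.
samples = np.column_stack([dist.ravel(), img.ravel()])
samples = samples[np.argsort(samples[:, 0])]
spacing = np.median(np.diff(samples[:, 0]))   # typical gap, a small fraction of a pixel
```

Because the slant makes every pixel row cross the edge at a slightly different sub-pixel phase, the sorted projected samples land on an effectively much denser grid than the sensor's pixel pitch.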

Figure 3. Green channel raw data from the captured edge image projected onto the normal axis. The Edge Spread Function (orange line) is constructed based on this noisy profile, which can be considered to be continuous. Note the higher standard deviation with higher intensity, the result of shot noise and photon response non uniformities. This edge was 152 pixels long at an angle of 4.286 degrees.

There are many ways to construct the ESF from projected raw data (see the answer to Julio in the comments below for one based on constrained regression, resulting in the orange curve in Figure 3).  MTF Mapper collects it into bins 1/8th of a pixel wide, leaving plenty of latitude for later calculations in a photographic context.  Binning has the effect of regularizing the data while providing initial noise reduction.  The physical units on the normal axis are pixels – here short for pixel pitch, which is the horizontal or vertical distance between the centers of two contiguous pixels.
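The binning approach can be sketched like this.  `bin_esf` is a hypothetical helper, a much simplified stand-in for MTF Mapper's actual regularization, shown only to make the 1/8-pixel binning concrete:

```python
import numpy as np

def bin_esf(dist, intensity, bin_width=0.125):
    """Average projected samples into bins 1/8 of a pixel wide (a sketch,
    not MTF Mapper's implementation)."""
    idx = np.floor((dist - dist.min()) / bin_width).astype(int)
    counts = np.bincount(idx)
    sums = np.bincount(idx, weights=intensity)
    keep = counts > 0                         # drop any empty bins at the extremes
    centers = dist.min() + (np.nonzero(keep)[0] + 0.5) * bin_width
    return centers, sums[keep] / counts[keep]

# Toy data: a noisy logistic edge profile sampled at random sub-pixel positions.
rng = np.random.default_rng(0)
d = rng.uniform(-10, 10, 5000)
v = 1 / (1 + np.exp(-d / 1.0)) + rng.normal(0, 0.01, d.size)
x, esf = bin_esf(d, v)
# Binning regularizes the sample positions onto a uniform 1/8-pixel grid while
# averaging down the noise within each bin.
```

With roughly 30 samples per bin in this toy case, the per-bin noise drops by a factor of about the square root of the bin count, which is the "initial noise reduction" mentioned above.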

Assuming a perfect edge of good contrast, no distortion, low noise and excellent capture technique, after some massaging the resulting Edge Spread Function can be considered for all intents and purposes a one-dimensional representation of the profile of the continuous light intensity about the edge after it has gone through optics and filters, as detected by a very small pixel aperture usually assumed to be squarish.  Since for, say, an ideally printed and illuminated edge the transition from black to white on paper is a step function centered at zero pixels below, any degradation in the ESF from this ideal can be ascribed to loss of sharpness due to the imaging system:

Figure 4. Deviation from the ideal edge profile due to an imperfect imaging system (and target).

From ESF to PSF Intensity Profile to MTF

The differential of the ESF so obtained is the Line Spread Function, which is directly proportional to the intensity profile of the system’s 2D Point Spread Function in the direction perpendicular to the edge/line.  Yes, that’s approximately the projection of the impulse response of the imaging system in that one direction, aka what the two-dimensional continuous intensity profile of a star would look like if viewed from the side.

Figure 5. The Line Spread Function (LSF) is computed as the differential of the measured Edge Spread Function (ESF). The LSF is the 1D projection of the 2D Point Spread Function (PSF) of the system as a whole onto the edge normal.  Curves produced by MTF Mapper.

If the imaging system and the edge were perfect the LSF would be an infinitely narrow vertical pulse at zero pixels (a delta function) – but in practice they never are, so the pulse is instead spread out as shown in Figure 5.

Mathematically, the LSF represents an approximate instance of the Radon Transform of the two dimensional system PSF onto the edge normal.  Note that the ‘bright’ side of the LSF is noisier than the ‘dark’ side in this base ISO capture as a result of shot noise and Photo Response Non Uniformities.  Differentiation amplifies noise.
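The derivative step, and the noise amplification it causes, can be sketched with a synthetic ESF (the logistic shape and noise level are arbitrary illustration choices, not measured values):

```python
import numpy as np

# The LSF as the finite-difference derivative of a binned ESF (1/8-pixel bins),
# illustrating how differentiation amplifies noise.
dx = 0.125
x = np.arange(-8, 8, dx)
esf_clean = 1 / (1 + np.exp(-x / 0.8))            # synthetic noiseless ESF
rng = np.random.default_rng(1)
esf_noisy = esf_clean + rng.normal(0, 0.002, x.size)

lsf_clean = np.gradient(esf_clean, dx)            # peak value is 1/(4*0.8)
lsf_noisy = np.gradient(esf_noisy, dx)

# The tiny ESF noise (sigma 0.002) becomes much larger noise in the LSF because
# the difference of neighbors is divided by the small bin width.
noise_rms = np.std(lsf_noisy - lsf_clean)
```

This is why practical implementations smooth, model or window the ESF before differentiating and transforming it.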

By taking the magnitude (modulus) of the Fourier Transform of the one-dimensional LSF so derived we are able to determine fairly accurately* the combined Modulation Transfer Function of the edge, camera and lens at the position of the edge in the direction perpendicular to it.  The resulting curve tells us how good our equipment is at capturing all levels of detail (spatial resolution) from the scene in that one direction.
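This final step can be sketched as follows.  A synthetic Gaussian LSF stands in for a measured one; the point is the transform and the frequency axis, which with 1/8-pixel bins extends to 4 cycles/pixel, far beyond the sensor Nyquist of 0.5 cycles/pixel:

```python
import numpy as np

# The MTF as the normalized magnitude of the DFT of the (super-sampled) LSF.
dx = 0.125                                    # bin width in pixels
x = np.arange(-8, 8, dx)
lsf = np.exp(-x**2 / (2 * 0.7**2))            # synthetic Gaussian LSF, sigma = 0.7 px

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                 # MTF(0) = 1 by definition
freq = np.fft.rfftfreq(lsf.size, d=dx)        # cycles per pixel, up to 4 cy/px

# Sanity check: a Gaussian LSF yields a Gaussian MTF, exp(-2 pi^2 sigma^2 f^2).
expected = np.exp(-2 * np.pi**2 * 0.7**2 * freq**2)
```

Normalizing by the zero-frequency value is what makes the result a modulation (contrast) transfer function rather than an arbitrary spectrum.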

Mathematically we have applied the Fourier-Slice Theorem to the projection of the system’s PSF obtaining a radial slice through the two-dimensional MTF of the two-dimensional PSF (see the answer to Daniel’s question below for a little more detail on this interesting bit of theory).

Here is MTF Mapper's output: the SFR computed from the green raw channels of a capture of a centrally located back-lit razor edge 4.61 degrees off vertical, therefore representative of spatial frequency performance in an approximately easterly direction (the 24MP FF camera used for the capture is effectively AA-less, i.e. without an anti-aliasing filter, in this direction in landscape orientation):

Figure 6. MTF/SFR produced by MTF Mapper from a back-lit razor edge captured by a Nikon D610 mounting a 50mm/1.8D at ISO 100 and f/5.6.

In this case the blue MTF curve of the raw green channels of a Nikon D610 mounting a 50mm/1.8D @ f/5.6 indicates that system performance is excellent for a 24MP sensor, with good contrast transfer at higher frequencies and a reference MTF50 value of about 1590 lp/ph (or 66.3 lp/mm – see this article for an explanation of how the units used in spatial resolution are related).

On the other hand the curve hits the grayscale Nyquist frequency with a lot of energy still, at an MTF of around 0.37 (aka MTF37), which does not bode well for aliasing in very fine detail in this direction.  The grayscale Nyquist limit refers to the maximum number of line pairs that can be represented accurately by the sensor, in this case about 2008 line pairs per picture height, since the D610 has twice that many pixels on the short side.
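The arithmetic behind these figures is straightforward.  The sketch below assumes the D610's published geometry (6016 x 4016 pixels on a nominally 24mm-tall sensor):

```python
# Unit conversions behind the values quoted above (D610 geometry assumed).
sensor_height_mm = 24.0
pixels_short_side = 4016

# lp/ph to lp/mm: divide by the picture height in mm.
mtf50_lp_ph = 1590
mtf50_lp_mm = mtf50_lp_ph / sensor_height_mm       # 1590 / 24 = 66.25 lp/mm

# Grayscale Nyquist: one line pair needs at least two pixels.
nyquist_lp_ph = pixels_short_side / 2              # 2008 lp/ph
```

The same two relations (divide by picture height for lp/mm, halve the pixel count for Nyquist) apply to any sensor.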

Super-Sampling Lets us See Far

We are able to measure the imaging system’s frequency response above the Nyquist frequency because of the method’s super-sampling, which allows us to take a much closer look at an approximation of the continuous PSF profile of the system on the imaging plane than possible for example with visual charts.  For instance we can see the null caused by pixel aperture (width) just above 4000 lp/ph or about  170 lp/mm.
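That null falls where theory predicts it.  For a 100% fill-factor square pixel of width w, the aperture MTF along an axis is |sinc(w f)| with its first null at f = 1/w; the sketch below assumes a pixel pitch of about 5.95 microns for this 24MP full-frame sensor:

```python
import numpy as np

# First null of the pixel-aperture MTF, |sinc(w*f)|, at f = 1/w.
pitch_mm = 0.00595                            # assumed D610 pixel pitch, ~5.95 um
null_lp_mm = 1 / pitch_mm                     # about 168 lp/mm
null_lp_ph = null_lp_mm * 24                  # about 4034 lp/ph, "just above 4000"

f = np.linspace(0, 200, 1000)                 # spatial frequency axis in lp/mm
mtf_aperture = np.abs(np.sinc(pitch_mm * f))  # np.sinc(x) = sin(pi x)/(pi x)
```

The computed null at roughly 168 lp/mm (about 4030 lp/ph) matches the dip visible in the measured curve.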

You can compare this MTF curve and reference values directly with those obtained by others with similarly good technique straight from raw green channels – a hard feat though, given the number of variables involved – to make an informed call as to which system is ‘sharper’ or whether an old lens has lost its touch.  With great care the method can provide absolute results but in most cases it is instead used to obtain relative results, with only one variable in the process varied at a time (for example  just the lens).  Keep in mind that in typical situations a difference of 10% in MTF50 values is noticeable, but a trained eye is needed to perceive a difference of 5% even when pixel peeping.

So how do we use MTF Mapper to obtain MTF curves?  Next.

 

*Slanted edge MTF results obtained with excellent technique should be comparable to those obtained with other methods.

**If you would like to learn more about how MTF measurements are derived and used to fine tune the optics in anything from sensors to orbiting satellites off slanted shadows created by bridges you will find this, this and this paper interesting.

 

 
